Déjà Image-Captions: A Corpus of Expressive Descriptions in Repetition
Authors
Abstract
We present a new approach to harvesting a large-scale, high-quality image-caption corpus that makes better use of existing web data with no additional human effort. The key idea is to focus on Déjà Image-Captions: naturally existing image descriptions that are repeated almost verbatim by more than one individual for different images. The resulting corpus provides an association structure linking 4 million images to 180K unique captions, capturing a rich spectrum of everyday narratives, including figurative and pragmatic language. Exploring the use of the new corpus, we also present the new conceptual tasks of visually situated paraphrasing, creative image captioning, and creative visual paraphrasing.
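The harvesting idea lends itself to a compact illustration. The Python sketch below is a minimal approximation of the procedure described in the abstract, not the authors' pipeline: it assumes the raw web data arrives as (user_id, image_id, caption) triples and approximates "almost verbatim" repetition by lowercasing, stripping punctuation, and collapsing whitespace, then keeps only captions used by more than one individual for more than one image.

```python
# A minimal sketch of deja-caption harvesting. Assumptions (not from the
# paper): input is (user_id, image_id, caption) triples, and "almost
# verbatim" repetition is approximated by a crude string normalization.
import re
import string
from collections import defaultdict

def normalize(caption: str) -> str:
    """Lowercase, drop punctuation, collapse whitespace."""
    caption = caption.lower().translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", caption).strip()

def harvest_deja_captions(triples):
    """Keep captions repeated by more than one individual for different images.

    Returns a dict mapping each surviving normalized caption to the set of
    image ids it describes, i.e. the caption-image association structure.
    """
    images_by_caption = defaultdict(set)
    users_by_caption = defaultdict(set)
    for user_id, image_id, caption in triples:
        key = normalize(caption)
        images_by_caption[key].add(image_id)
        users_by_caption[key].add(user_id)
    return {
        cap: imgs
        for cap, imgs in images_by_caption.items()
        if len(imgs) > 1 and len(users_by_caption[cap]) > 1
    }

# Toy example: only the caption repeated by two users for two images survives.
triples = [
    ("u1", "img1", "The calm before the storm."),
    ("u2", "img2", "the calm before the storm"),
    ("u3", "img3", "My dog at the beach."),
]
print(harvest_deja_captions(triples))  # {'the calm before the storm': {'img1', 'img2'}}
```

Each surviving caption maps to the set of images it describes, which is the kind of caption-image association structure the corpus is built around.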
Similar resources
Generating Image Descriptions using Multilingual Data
In this paper we explore several neural network architectures for the WMT 2017 multimodal translation sub-task on multilingual image caption generation. The goal of the task is to generate image captions in German, using a training corpus of images with captions in both English and German. We explore several models which attempt to generate captions for both languages, ignoring the English outp...
TREETALK: Composition and Compression of Trees for Image Descriptions
We present a new tree-based approach to composing expressive image descriptions that makes use of naturally occurring web images with captions. We investigate two related tasks: image caption generalization and generation, where the former is an optional subtask of the latter. The high-level idea of our approach is to harvest expressive phrases (as tree fragments) from existing image description...
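As a rough illustration of what harvesting phrases "as tree fragments" can mean in practice, the Python sketch below (my illustration, not the TREETALK system) takes an already parsed caption in bracketed form and collects its phrase-level subtrees as candidate fragments; the hand-written parse, the choice of NP/VP/PP labels, and the use of NLTK's Tree class are all assumptions made for the example.

```python
# Illustrative only: collect phrase-level subtrees of a parsed caption as
# candidate fragments. The bracketed parse and label set are assumptions.
from nltk.tree import Tree

PHRASE_LABELS = {"NP", "VP", "PP"}  # which constituents to harvest (illustrative)

def harvest_fragments(parse_str: str):
    """Return the phrase-level subtrees of a bracketed parse as surface strings."""
    tree = Tree.fromstring(parse_str)
    return [
        " ".join(sub.leaves())
        for sub in tree.subtrees()
        if sub.label() in PHRASE_LABELS
    ]

caption_parse = ("(S (NP (DT the) (JJ lonely) (NN lighthouse))"
                 " (VP (VBZ stands) (PP (IN against) (NP (DT the) (NN storm)))))")
print(harvest_fragments(caption_parse))
# ['the lonely lighthouse', 'stands against the storm', 'against the storm', 'the storm']
```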
Using the Visual Denotations of Image Captions for Semantic Inference
Semantic inference is essential to natural language understanding. There are two different traditional approaches to semantic inference. The logic-based approach translates utterances into a formal meaning representation that is amenable to logical proofs. The vector-based approach maps words to vectors that are based on the contexts in which the words appear in utterances. Real-valued similari...
Model Summaries for Location-related Images
At present there is no publicly available data set to evaluate the performance of different summarization systems on the task of generating location-related extended image captions. In this paper we describe a corpus of human generated model captions in English and German. We have collected 932 model summaries in English from existing image descriptions and machine translated these summaries in...
From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions
We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituen...
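To make the notion of denotational similarity concrete, the Python sketch below treats the denotation of a caption as the set of images it describes and scores two captions by the overlap of those sets; the Jaccard formulation and the toy data are illustrative assumptions, not necessarily the exact metric defined in the paper.

```python
# Illustrative denotational similarity: the denotation of a caption is the
# set of images it describes; Jaccard overlap is an assumed scoring choice.
def denotational_similarity(images_a: set, images_b: set) -> float:
    """Jaccard overlap between the image sets (denotations) of two captions."""
    if not images_a and not images_b:
        return 0.0
    return len(images_a & images_b) / len(images_a | images_b)

# Toy denotations: each caption maps to the set of images it describes.
denotations = {
    "a dog runs on the beach": {"img1", "img2", "img3"},
    "a dog plays in the sand": {"img2", "img3", "img4"},
    "a city skyline at night": {"img5"},
}
print(denotational_similarity(denotations["a dog runs on the beach"],
                              denotations["a dog plays in the sand"]))   # 0.5
print(denotational_similarity(denotations["a dog runs on the beach"],
                              denotations["a city skyline at night"]))   # 0.0
```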